
    Towards pervasive computing in health care – A literature review

    Abstract

    Background: The evolving concepts of pervasive computing, ubiquitous computing and ambient intelligence are increasingly influencing health care and medicine. Summarizing published research, this literature review provides an overview of recent developments and implementations of pervasive computing systems in health care. It also highlights some of the experiences reported from deployment processes.

    Methods: There is no clear definition of pervasive computing in the current literature, so specific inclusion criteria for selecting articles about relevant systems were developed. Searches were conducted in four scientific databases alongside manual journal searches for the period 2002 to 2006. The articles included present prototypes, case studies and pilot studies, clinical trials, and systems that are already in routine use.

    Results: The searches identified 69 articles describing 67 different systems. In a quantitative analysis, these systems were categorized by project status, health care setting, user group, improvement aim, and system features (i.e., component types, data gathering, data transmission, system functions). The focus is on the types of systems implemented, their frequency of occurrence, and their characteristics. Qualitative analyses were performed on deployment issues such as organizational and personnel issues, privacy and security issues, and financial issues. The paper thus provides comprehensive access to the literature of this emerging field by addressing application settings, system features, and deployment experiences.

    Conclusion: Both an overview and an analysis of the literature on a broad and heterogeneous range of systems are provided. Most systems are described in their prototype stage. Deployment issues such as implications for organization or personnel, privacy concerns, or financial issues are rarely mentioned, even though their solution is regarded as decisive for transferring promising systems into regular operation. There is a need for further research on the deployment of pervasive computing systems, including clinical studies, economic and social analyses, and user studies.

    Possible discrimination by algorithmic decision systems and machine learning – an overview

    Algorithmic decision systems (ADS), i.e. programmed procedures that compute an output from a given input in precisely defined sequences of steps and derive a decision or decision recommendation from it, are a frequent part of everyday life: they determine the best route for a planned trip, a suitable partner on a dating platform, or one's creditworthiness. In doing so they set more or less significant courses and may thus shape life chances, often without the person affected being aware of it. Because of their number-based, rule-bound operation, ADS might at first be assumed to be the more objective decision-making instances. However, several widely publicized cases of biased machine decisions, for example when an online retailer was recruiting new employees and its learning ADS suggested almost exclusively men, raise doubts about the objectivity of algorithmic decision recommendations and pose the question of whether ADS make decisions that are more (or less) fair than those of humans: does the use of ADS change the risks of discrimination? The TAB pursues this question in the study published as background paper no. 24. Drawing on four case studies from the fields of job placement, medical care, the penal system, and automated person recognition, the study shows that unequal treatment by ADS is often a continuation of "pre-digital" unequal treatment, and at the same time makes clear that whether a specific instance of unequal treatment is discriminatory (or not) is often highly contested within society and within case law. Finally, the background paper presents various approaches for preventing algorithm-based discrimination.

    Contents:
    Summary
    1 Introduction
    2 Algorithmic decision systems and machine learning
    2.1 Definitions and characteristics of algorithmic systems
    2.2 Objectives and development of algorithmic decision systems
    2.3 Algorithms and artificial intelligence in public perception
    2.4 Human-technology interaction: how do people deal with algorithmic decision proposals?
    3 Unequal treatment and discrimination of individuals and groups
    3.1 Definitions and characteristics of social discrimination
    3.2 Discrimination by algorithmic decision systems
    3.3 Statistical unequal treatment and statistical discrimination
    3.4 Legal aspects of dealing with the discrimination of individuals and groups
    4 Case studies: unequal treatment by ADS in different areas of life
    4.1 Case study 1: unequal treatment in medical care through ADS and ML
    4.2 Case study 2: an algorithm for classifying unemployed people in Austria
    4.3 Case study 3: COMPAS, a US ADS in the penal system
    4.4 Case study 4: algorithmic person recognition from visual data in the USA
    4.5 Commonalities and differences of the four case studies
    5 Options for action
    6 References
    7 Appendix
    7.1 Figures
    7.2 Tables
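An illustrative aside, not taken from the TAB background paper: the abstract describes an ADS as a programmed procedure that derives a decision recommendation from input data in fixed steps, and notes that such systems can carry "pre-digital" unequal treatment forward. The minimal Python sketch below, with entirely hypothetical groups, features, weights, and thresholds, shows how a fixed scoring rule built on a proxy feature can produce unequal selection rates between groups, and how a simple disparate-impact check makes that visible.

```python
# Hypothetical sketch: a toy "ADS" scoring rule plus a disparate-impact check.
# All feature names, weights, thresholds, and applicants are invented for
# illustration; they do not come from the study summarized above.

from collections import defaultdict

def ads_score(applicant: dict) -> float:
    """Toy decision rule whose weights mimic a system fitted to biased
    historical data (career gaps acting as a proxy for group membership)."""
    return (0.6 * applicant["years_of_uninterrupted_employment"]
            + 0.4 * applicant["referral_from_current_staff"])

def selection_rates(applicants, threshold=3.0):
    """Share of applicants per group whose score clears the threshold."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for a in applicants:
        totals[a["group"]] += 1
        if ads_score(a) >= threshold:
            accepted[a["group"]] += 1
    return {g: accepted[g] / totals[g] for g in totals}

# Hypothetical applicant pool in which group B has more career breaks.
applicants = [
    {"group": "A", "years_of_uninterrupted_employment": 6, "referral_from_current_staff": 1},
    {"group": "A", "years_of_uninterrupted_employment": 5, "referral_from_current_staff": 0},
    {"group": "B", "years_of_uninterrupted_employment": 2, "referral_from_current_staff": 0},
    {"group": "B", "years_of_uninterrupted_employment": 3, "referral_from_current_staff": 1},
]

rates = selection_rates(applicants)
print("Selection rates per group:", rates)
# A disparate-impact ratio well below 1 flags possible indirect discrimination.
print(f"Disparate-impact ratio B/A: {rates['B'] / rates['A']:.2f}")
```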

    Attitudes towards big data practices and the institutional framework of privacy and data protection - A population survey (KIT Scientific Reports ; 7753)

    A survey of the German population addressed attitudes towards scenarios of big data practices, i.e. price discrimination in retail, credit scoring, and differentiation in health insurance and in employment, each involving the use of internet data, automated decision-making, and the selling of data. The study analysed behavioural adaptations, protection measures, relations to demographics, personal value orientations, knowledge about computers, and attitudes towards privacy and data protection.

    Tackling problems, harvesting benefits: A systematic review of the regulatory debate around AI

    How to integrate an emerging and all-pervasive technology such as AI into the structures and operations of our society is a question of contemporary politics, science and public debate. It has produced a considerable amount of international academic literature from different disciplines. This article analyzes the academic debate around the regulation of artificial intelligence (AI). The systematic review comprises a sample of 73 peer-reviewed journal articles published between January 1st, 2016, and December 31st, 2020. The analysis concentrates on societal risks and harms, questions of regulatory responsibility, and possible adequate policy frameworks, including risk-based and principle-based approaches. The main interest lies in the regulatory approaches and instruments that are proposed. Various forms of intervention, such as bans, approvals, standard-setting, and disclosure, are presented. The assessments in the included papers point to the complexity of the field, its immaturity, and the remaining lack of clarity. By presenting a structured analysis of the academic debate, we contribute both empirically and conceptually to a better understanding of the nexus of AI and regulation and the underlying normative decisions. A comparison of the scientific proposals with the proposed European AI regulation illustrates the specific approach of the regulation, its strengths and its weaknesses.

    Digital Rights Management and Consumer Acceptability: A Multi-Disciplinary Discussion of Consumer Concerns and Expectations

    The INDICARE project – the Informed Dialogue about Consumer Acceptability of DRM Solutions in Europe – has been set up to raise awareness about consumer and user issues of Digital Rights Management (DRM) solutions. One of the main goals of the INDICARE project is to contribute to consensus-building among multiple players with heterogeneous interests in the digital environment. The aim of the present report is to promote this process and to contribute to the creation of a common level of understanding. It provides an overview of consumer concerns and expectations regarding DRM, and discusses the findings from a social, legal, technical and business perspective. A general overview of the existing EC initiatives shows that questions of consumer acceptability of DRM have only recently begun to draw wider attention. A review of the relevant statements, studies and reports confirms that awareness of consumer concerns is still at a low level. Five major categories of concerns have been distinguished so far: (1) fair conditions of use and access to digital content, (2) privacy, (3) interoperability, (4) transparency and (5) various aspects of consumer friendliness. From the legal point of view, many of the identified issues go beyond the scope of copyright law, i.e. the field of law in which DRM has traditionally been discussed. Often they are a matter of general or sector-specific consumer protection law. Furthermore, it is still unclear to what extent technology and an appropriate design of technical solutions can provide an answer to some of the concerns of consumers; one goal of the technical chapter was precisely to highlight some of these technical possibilities. Finally, it is shown that consumer acceptability of DRM is important for the economic success of different business models based on DRM. Fair and responsive DRM design can be a profitable strategy; however, DRM-free alternatives exist as well.

    Keywords: Digital Rights Management; consumers; intellectual property; business models